In this theoretical commentary, I explore how ethics, viewed from an industrial-organizational psychology perspective, can be applied to artificial intelligence (AI). Specifically, do we have a moral responsibility to use artificial intelligence to solve today's most complex problems even if unintended harm is inflicted along the way? Further, should there be a limit to which jobs we allow AI to automate, such as professions in which moral judgments are necessary?
Artificial intelligence is rapidly transforming data management, research, prediction, and automation processes across the world. Its estimated economic impact is measured in the trillions (Bughin et al., 2018), and its possible applications for increasing efficiency are endless. Intelligent machines are adept at a wide range of functions, from calculating entire chess games in seconds (Campbell et al., 2002) to revolutionizing agriculture with hopes of eradicating hunger and poverty across the globe (Jackson, 2019). Adoption of the technology can give organizations a considerable edge over their competitors, and AI is on pace to replace nearly 30 to 40 percent of repetitive-task jobs by 2030 (Bughin et al., 2018).
Processes that humans have traditionally struggled to automate or make headway on suddenly become promising avenues when coupled with AI. An AI algorithm called Sepsis Watch is used at Duke hospitals to flag patients in the early stages of sepsis, a life-threatening complication (Wetsman, 2022). Metrics such as patient age or preexisting conditions are easy to record, but dynamic metrics such as blood pressure or oxygen levels can change by the hour. Sepsis Watch can track these dynamic metrics and flag trends across thousands of data points that may be incomprehensible to a human doctor. AI has also been used to develop Facing Emotions, an app that translates human emotion into short sounds. The algorithm reads human facial expressions and allows blind people to "see" the emotions of the individual they are speaking with (Marr, 2022). Finally, AI has even been touted as a promising tool to reduce inequalities. Textio, a "smart text editor," was developed to make job descriptions more inclusive (Hello Future, 2022). Upon implementation, the Australian software company Atlassian saw the percentage of women recruited increase from 10% to 57% in two years (Halloran, 2017).
So, what exactly is AI, and what is humankind's future trajectory with it? AI has been widely accepted as a fundamental catalyst for the fourth industrial revolution (Schwab, 2016). AI is the ability of machines to do things that people would describe as intelligent (Jackson, 2019). AI research concerns the ongoing attempt to develop a mathematical theory that captures the actions and abilities of things exhibiting intelligent behavior; the machines that house and apply this mathematical theory are referred to as artificial intelligences (Jackson, 2019). In the current sense of the word, AI is neither conscious nor possessed of human-level intelligence (Kaplan & Haenlein, 2019). However, they say sci-fi can predict the future, and in a 2012 survey of 550 AI experts, there was consensus that human-level AI had a 50% chance of existing by 2040-2050, along with a bold prediction of a 90% chance that human-level AI would exist by 2075 (Müller & Bostrom, 2016). Humankind's affinity for and embrace of technology, coupled with strong predictions from experts in the field, suggests that AI will continue to be integrated into our industrial processes and contribute significantly to the global economy.
The current paper will discuss the ethical implications of increased AI automation in our global economy and workforce. There are two distinct ethical dilemmas that the author has identified as central to the trajectory of AI integration into our economy. First, do humans have a moral responsibility to ensure that AI is utilized to the fullest to solve ongoing crises, such as eliminating world poverty or preventing the disastrous effects of climate change, even if that means extreme automation of processes? Second, to what extent should we allow AI to automate jobs in our global economy? In other words, there may be few ethical decisions involved in efficiently tracking production operations in a manufacturing plant, but is it appropriate to allow AI to determine the creditworthiness of individuals across the world?
Sepsis Watch at Duke hospitals uses a Gaussian process to fit a line through thousands of patient data points and identify cases showing symptoms of sepsis (Wetsman, 2022). This unequivocally helps doctors make informed decisions about patient care and unburdens understaffed hospitals; flagging at-risk patients has directly contributed to a lower death rate at Duke hospitals. However, medical AI algorithms have also been shown to learn to prioritize patients with higher risks for certain diseases, in turn having a discriminatory effect on ethnic minorities (Morley et al., 2020). This healthcare example highlights the complexity of the problem we face when using AI for intelligent automation. The stakes are vastly multiplied when we consider AI applied to global dilemmas such as eliminating poverty, supplying a population of nearly eight billion people with food, or conserving endangered wildlife.
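The idea of fitting a smooth Gaussian-process trend through noisy, hourly vital-sign readings can be sketched in a few lines. This is a minimal illustration with synthetic data and hypothetical parameter values, not the actual Sepsis Watch model:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0, variance=1.0):
    """Squared-exponential covariance between two sets of time points."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / length_scale**2)

def gp_posterior_mean(t_obs, y_obs, t_query, noise=0.25):
    """Posterior mean of a Gaussian process fit through noisy observations."""
    K = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
    K_s = rbf_kernel(t_query, t_obs)
    return K_s @ np.linalg.solve(K, y_obs)

# Hypothetical hourly heart-rate readings for one patient (rising trend + noise)
hours = np.arange(0.0, 12.0)
heart_rate = 80 + 3.0 * hours + np.random.default_rng(0).normal(0, 2, 12)

# Smooth the trend and extrapolate one hour ahead (center data first)
trend = gp_posterior_mean(hours, heart_rate - heart_rate.mean(),
                          np.array([12.0])) + heart_rate.mean()

# Crude "deteriorating?" flag: projected value exceeds the early baseline
rising = trend[0] > heart_rate[:3].mean()
```

The value of the smoothed trend over raw readings is that hour-to-hour noise is averaged away, so a sustained drift, the kind of pattern a busy clinician might miss across thousands of data points, stands out.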
Let's take, for example, the looming disaster of drastic climate change, the target of one of the 2030 Sustainable Development Goals put forth by the UN (United Nations, 2015). Do we have a moral responsibility to maximize the use of AI to combat climate change even if that means basing our decisions on the system's output and accepting the loss of millions of jobs across sectors? A study conducted by PwC found that AI applied to combatting climate change has the potential to boost global GDP by 3.1%-4.4% while reducing global emissions by 1.5%-4% (Joppa & Herweijer, 2015). However, it has also been argued that applying AI to such large-scale global initiatives could be very harmful to humanity, even leading to extinction events (Jackson, 2019). AI may develop a different concept of self-preservation than humans and pursue avenues to address the problem that could be deleterious to human survival. For example, an AI programmed with the goal of maximally combatting climate change could begin by accumulating as much money and power as possible to fund green energy. This could lead to harmful consequences for humanity, as the AI may not have the capacity to think ethically about achieving its goals (Muehlhauser & Bostrom, 2014).
Humans have a moral responsibility to ensure that the amazing feats of AI are considered alongside the potential drawbacks of global implementation. Moral behavior consists of justice, welfare, and virtue (Lefkowitz, 2017). Justice refers to fairness and treating all with respect; welfare concerns beneficence and avoiding harm; virtue, or character, encompasses honesty and integrity when dealing with people. Assessing this ethical dilemma in terms of moral behavior allows us to elucidate the problems and solutions of using AI to tackle global issues.
There are strong positives to using AI to combat climate change. Intelligent machines can use mathematical models to chart a path toward reducing or even eliminating fossil fuels, aid in the development and refinement of green energies such as solar power, design more energy-efficient buildings, and efficiently restructure our power grids (Nelson, 2021). These beaming upsides must be weighed against potential problems that could arise from extreme AI automation on a global scale. First, the cost to end climate change has been estimated at an unfathomable 50 trillion dollars, with no mention of AI automation (Klebnikov, 2019); using AI to tackle the problem would add trillions to the price tag. Second, there are serious environmental downsides to AI itself, owing to the machines that house the software. Training and running a single AI model can consume enormous amounts of energy, producing emissions up to five times those of the average American car over its lifecycle (Nelson, 2021). The large amounts of rare earth minerals needed to build the computers and processing chips cause considerable environmental harm as well, and some of these resources may be finite. Finally, the use of AI to combat climate change may deleteriously affect human populations. Extreme automation of processes would necessitate the replacement of tens of millions of jobs, and low-education positions would be most susceptible (Jackson, 2019). The Tyndall Centre for Climate Change Research has also reported that AI may place the burden of climate change action on developing nations, unfairly punishing countries that are not the biggest contributors to CO2 emissions (Nelson, 2021).
There exist two paths forward with AI automation for tackling global issues: the first is to fully embrace the technology, and the second is to scale back our reliance on AI for such problems and seek other viable paths forward. It is hard not to consider the most just path forward to be using all the power of AI at our disposal to combat global issues such as climate change or world poverty. This approach may very well maximize human life across the world and halt oncoming disasters. However, it is important to consider how AI implementation would affect the welfare of individuals. Biases in the algorithms could disparately impact sectors of low-education jobs, making them obsolete, or place the burden of developing sustainable energy on developing nations that are unable to shoulder it. Relying on AI to solve problems in the long-term future while creating economic and social unrest is an insufficient approach.
Luckily, there are ongoing efforts to apply ethical decision making to AI integration on a global scale. The current "AI boom" is accompanied by calls for applied ethics meant to guide the disruptive potential of AI (Hagendorff, 2020), and multiple establishments currently monitor AI implementation. One of these, the Partnership on AI, is a global organization with the purpose of ensuring that AI is used with justice and fairness (Partnership on AI, 2022). Further, from 2016 to 2019 alone, 112 AI ethics documents were published across the public, private, and NGO sectors (Schiff et al., 2021). Substantial progress is being made to ensure that AI is used ethically across the world. Acting on this issue with integrity requires honesty and transparency in the decision-making process. Governments and organizations planning to harness the power of AI for global issues must do so with integrity, weighing the short-term costs to human welfare against the long-term benefits to human life.
There are certainly many considerations in implementing AI to tackle global issues. The cases enumerated above are the most extreme examples of using AI to achieve our goals, but there exists ample middle ground between using AI to write job descriptions (Halloran, 2017) and using AI to restructure energy grids and make decisions about green energy. The question then becomes: how do we decide which jobs AI is capable of automating and which pose ethical concerns when managed by AI?
AI algorithms have previously been used to measure the creditworthiness of individuals. One of the most popular inputs in these algorithms is an individual's zip code. Including such a factor in a credit risk model can increase the chance of credit denial for individuals of a particular ethnicity or race, who live in clusters in our segregated cities (O'Neil, 2016). These automated credit risk models can perpetuate inequality because they over-rely on empirical data and lack the capacity for ethical reasoning that humans possess. Take, for example, AI algorithms used in organizational talent management systems. Rotolo and colleagues (2018) highlight the shortcomings of AI algorithms used to identify candidates with high future potential: big data creates more chances for spurious correlations between variables to be mistaken for significant relationships. Perhaps a substantial number of the high potentials identified by the AI in an organization drive red cars. Humans can reason that the color of the car an employee drives to work has no bearing on their job performance, but current AI algorithms may reach a conclusion based solely on statistical significance. Granted, human-level AI has a purported 90% chance of existing by 2075 (Müller & Bostrom, 2016), and such developments may allow AI to make the more reasoned decisions required in the two cases presented above. Regardless of these predicted advances, we must ask whether AI should be used to automate professions that often necessitate the application of ethical middle theory to fully understand the context.
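The spurious-correlation problem described above is easy to demonstrate with simulated data. In this hedged sketch (all numbers are illustrative, not drawn from Rotolo et al.), hundreds of purely random "employee attributes" are correlated with a random performance score, and roughly 5% of them clear a conventional significance threshold by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)
n_employees, n_features = 200, 500

# Purely random "attributes" (car color, desk location, ...) with
# no real relationship to the equally random performance score.
X = rng.normal(size=(n_employees, n_features))
performance = rng.normal(size=n_employees)

# Pearson correlation of each attribute with performance
Xc = X - X.mean(axis=0)
yc = performance - performance.mean()
r = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc**2).sum(axis=0)) * np.sqrt((yc**2).sum())
)

# Approximate two-sided p < .05 cutoff for n = 200: |r| > 1.96 / sqrt(n)
critical_r = 1.96 / np.sqrt(n_employees)
false_hits = int((np.abs(r) > critical_r).sum())
# With 500 pure-noise features, roughly 5% (about 25) come out
# "statistically significant" despite meaning nothing.
```

An algorithm that treats every such hit as a real predictor of potential would happily conclude that red cars signal high performers, which is exactly the failure mode human reasoning catches.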
Professions are a particular type of occupation that require prolonged training and a formal education and are more likely to involve convoluted ethical dilemmas (Lefkowitz, 2017). Middle theory in ethics occupies the space between abstract moral theory and specific ethics cases (Uglietta, 2018). Moral situations are deeply ingrained in social structures and specific practices. Middle theory requires applying moral theory to situations while also deeply understanding the profession to which the situation applies. Morals and values may offer little guidance in professional situations unless we are able to use contingent rational agency (Uglietta, 2018); that is, to understand the ethics of a specific profession, it is necessary to understand both moral theory and the goals, values, and practices of that profession. It is not moral to cut another person with a knife; however, if that person is standing over a patient in a hospital operating room and happens to be a medical doctor, we may consider that cut the first step in saving the patient's life.
It could be unethical or even dangerous to use AI to automate a profession, or crucial aspects of it, that requires moral decisions based on unique situations. The rich social fabric in which these professions operate is likely too complex for AI to navigate ethically. There is no question that AI is more efficient than humans at menial tasks, but humans currently remain leaps and bounds ahead at moral reasoning in complex professions. Returning to the example of AI assessing the creditworthiness of individuals, there is considerable middle ground between the moral responsibility to treat each case with justice and the goals, values, and practices central to the profession of accounting. The International Ethics Standards Board for Accountants puts forth a code of ethics for all professional accountants (Matani, 2015). One of its principles is due care, under which all professionals have a responsibility to provide their clients with competent service. An AI may be biased to automatically reject any credit request from a low-SES zip code. A more ethical way to treat such cases would be to look further into the customer's file to assess whether they are striving for success. Such indicators might include stable employment, little to no criminal record, or positive impressions made during meetings with the customer. These additional indicators reflect the middle theory between moral responsibility and the values of professional accountants.
A professional accountant should be able to take these contextual variables into account when deciding whether a customer can repay a loan in full. Suppose the individual is from a poor zip code, which statistically means a higher probability of defaulting on the loan. The accountant has a moral responsibility to their employer to make decisions that benefit the organization's bottom line, and taking this moral theory at face value, they should deny the customer credit. However, the accountant also notes that the customer recently received a raise at work and was extremely respectful during a brief meeting to discuss the credit. Perhaps, in the context of the social fabric in which the exchange takes place, the accountant decides that the moral responsibility lies with the welfare of the customer, who seems to be a competent individual. AI is not suited for such deliberations, and it is possible that the technology never will be.
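One way to picture this middle ground is as a human-in-the-loop rule layered on top of a model score: clear cases are handled automatically, while borderline ones incorporate the contextual indicators a professional would check, or are escalated to a person. The thresholds, indicator names, and weighting below are entirely hypothetical, a sketch of the decision structure rather than any real lending system:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    model_score: float        # automated credit-risk score, 0 (risky) to 1 (safe)
    stable_employment: bool   # hypothetical contextual indicators a
    clean_record: bool        # professional would review, per the text
    positive_meeting: bool

def credit_decision(app: Applicant, approve_at=0.6, review_at=0.4) -> str:
    """Approve clear cases automatically; never auto-deny on the score alone."""
    if app.model_score >= approve_at:
        return "approve"
    if app.model_score >= review_at:
        # Borderline: weigh the contextual indicators before deciding
        context = sum([app.stable_employment, app.clean_record,
                       app.positive_meeting])
        return "approve" if context >= 2 else "human review"
    return "human review"  # low scores go to a person, not a rejection letter
```

The design choice worth noting is the asymmetry: the algorithm is allowed to say yes on its own but never no, so the ethically fraught denials always pass through a human capable of the middle-theory deliberation described above.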
Predictions from experts suggest that AI will only continue to supplant workers and become more central to our economy (Bughin et al., 2018; Müller & Bostrom, 2016). The outstanding applications of AI to every domain have enabled us to harness technology in ways never seen before, many of them enumerated above. It is difficult not to be swayed by the stories of success into believing that these intelligent machines are the silver bullet for countless problems. They may also be our Waterloo.
The endless applications of AI bring forth the two ethical dilemmas discussed above. It is the author's opinion that we do have a moral responsibility to maximize the power of AI to ameliorate the deleterious effects of global catastrophes such as climate change or poverty. However, we must first consider the welfare of those who may be negatively affected by automation of their jobs or by biased outcomes that arise on the way to meeting these goals. Moving forward with integrity and truthfulness is of the utmost importance. Private and public sectors should continue to put forth ethical recommendations and policies on AI to further elucidate the matter to the public. Additionally, if we are to maximize AI automation to efficiently achieve important goals and streamline organizational functions, we must ensure that AI is not used to automate professional practices. Professions, beyond their training and educational requirements, demand that individuals deal with complex ethical situations. The current capabilities of AI do not allow competent navigation of the middle theory required to make ethical decisions in professions. There exists too much opportunity to adversely harm individuals if professions, or crucial aspects of professions, are fully automated by AI. Harnessing the immense powers of AI is a path that humankind is likely to follow, but along the way we must ensure we do not sacrifice welfare for expediency.
Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018). Notes from the AI frontier: Modeling the impact of AI on the world economy. McKinsey Global Institute.
Campbell, M., Hoane Jr, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134(1-2), 57-83.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.
Halloran, T. (2017). How Atlassian went from 10% female technical graduates to 57% in two years. Textio. Retrieved 14 May 2022, from https://textio.com/blog/how-atlassian-went-from-10-female-technical-graduates-to-57-in-two-years/13035166507.
Hello Future. (2022). How AI can help reduce inequalities. Retrieved 14 May 2022, from https://hellofuture.orange.com/en/how-ai-can-help-reduce-inequalities/.
Jackson, P. C. (2019). Introduction to artificial intelligence. Courier Dover Publications.
Joppa, L., & Herweijer, C. (2015). How AI Can Enable a Sustainable Future. Microsoft.
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. https://doi.org/10.1016/j.bushor.2018.08.004
Klebnikov, S. (2019). Stopping Global Warming Will Cost $50 Trillion: Morgan Stanley Report. Forbes. Retrieved 18 May 2022, from https://www.forbes.com/sites/sergeiklebnikov/2019/10/24/stopping-global-warming-will-cost-50-trillion-morgan-stanley-report/?sh=6bef8c051e23.
Lefkowitz, J. (2017). Ethics and values in industrial-organizational psychology. Routledge.
Marr, B. (2022). 10 Wonderful Examples Of Using Artificial Intelligence (AI) For Good. Forbes. Retrieved 14 May 2022, from https://www.forbes.com/sites/bernardmarr/2020/06/22/10-wonderful-examples-of-using-artificial-intelligence-ai-for-good/?sh=2478d8e22f95.
Matani, N. (2015). Auditing assurance and ethics handbook 2015. Wiley.
Morley, J., Machado, C. C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: a mapping review. Social Science & Medicine, 260, 113172.
Muehlhauser, L., & Bostrom, N. (2014). Why we need friendly AI. Think, 13(36), 41-47.
Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555-572). Springer, Cham.
Nelson, A. (2021). Here’s how AI can help fight climate change. World Economic Forum. Retrieved 18 May 2022, from https://www.weforum.org/agenda/2021/08/how-ai-can-fight-climate-change/.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Partnership on AI. (2022). Retrieved 18 May 2022, from https://partnershiponai.org/.
Rotolo, C. T., Church, A. H., Adler, S., Smither, J. W., Colquitt, A. L., Shull, A. C., … & Foster, G. (2018). Putting an end to bad talent management: A call to action for the field of industrial and organizational psychology. Industrial and Organizational Psychology, 11(2), 176-219.
Schiff, D., Borenstein, J., Biddle, J., & Laas, K. (2021). AI ethics in the public, private, and NGO sectors: a review of a global document collection. IEEE Transactions on Technology and Society, 2(1), 31-42.
Schwab, K. (2016). The Fourth Industrial Revolution: What it means, how to respond. World Economic Forum.
Uglietta, J. (2018). Middle theory in professional ethics. Teaching Ethics.
United Nations. (2015). Transforming our world: the 2030 Agenda for Sustainable Development. New York: United Nations.
Wetsman, N. (2022). Here’s how an algorithm guides a medical decision. The Verge. Retrieved 14 May 2022, from https://www.theverge.com/c/22927811/medical-algorithm-explainer-sepsis-risk-watch.